2,045 research outputs found

    Perception of nonnative tonal contrasts by Mandarin-English and English-Mandarin sequential bilinguals

    Full text link
    This study examined the role of acquisition order and crosslinguistic similarity in influencing transfer at the initial stage of perceptually acquiring a tonal third language (L3). Perception of tones in Yoruba and Thai was tested in adult sequential bilinguals representing three different first (L1) and second language (L2) backgrounds: L1 Mandarin-L2 English (MEBs), L1 English-L2 Mandarin (EMBs), and L1 English-L2 intonational/non-tonal (EIBs). MEBs outperformed EMBs and EIBs in discriminating L3 tonal contrasts in both languages, while EMBs showed a small advantage over EIBs on Yoruba. All groups showed better overall discrimination in Thai than Yoruba, but group differences were more robust in Yoruba. MEBs’ and EMBs’ poor discrimination of certain L3 contrasts was further reflected in the L3 tones being perceived as similar to the same Mandarin tone; however, EIBs, with no knowledge of Mandarin, showed many of the same similarity judgments. These findings thus suggest that L1 tonal experience has a particularly facilitative effect in L3 tone perception, but there is also a facilitative effect of L2 tonal experience. Further, crosslinguistic perceptual similarity between L1/L2 and L3 tones, as well as acoustic similarity between different L3 tones, play a significant role at this early stage of L3 tone acquisition.

    The three-way relationship of polymorphisms of porcine genes encoding terminal complement components, their differential expression, and health-related phenotypes

    Get PDF
    Background: The complement system is an evolutionarily ancient mechanism that plays an essential role in innate immunity and contributes to the acquired immune response. Three modes of activation, known as the classical, alternative, and lectin pathways, lead to the initiation of a common terminal lytic pathway. The terminal complement components (TCCs: C6, C7, C8A, C8B, and C9) are encoded by the genes C6, C7, C8A, C8B, C8G, and C9. We aimed at experimentally testing the porcine genes encoding TCCs as candidate genes for immune competence and disease resistance by addressing the three-way relationship of genotype, health-related phenotype, and mRNA expression. Results: Comparative sequencing of cDNAs of animals of the breeds German Landrace, Piétrain, Hampshire, Duroc, Vietnamese Potbelly Pig, and Berlin Miniature Pig (BMP) revealed 30 SNPs (21 in protein domains, 12 with AA exchange). The promoter regions (each ~1.5 kb upstream of the transcription start sites) of C6, C7, C8A, C8G, and C9 exhibited 29 SNPs. Significant effects of the TCC-encoding genes on hemolytic complement activity were shown in a cross of Duroc and BMP after vaccination against Mycoplasma hyopneumoniae, Aujeszky's disease virus, and PRRSV, by analysis of variance using repeated-measures mixed models. Family-based association tests (FBAT) confirmed the associations. The promoter SNPs were associated with the relative abundance of TCC transcripts obtained by real-time RT-PCR of 311 liver samples of commercial slaughter pigs. Complement gene expression showed a significant relationship with the prevalence of acute and chronic lung lesions. Conclusions: The analyses point to considerable variation of the porcine TCC genes and promote them as candidate genes for disease resistance.
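    The abstract describes, in broad strokes, a repeated-measures mixed-model analysis of hemolytic complement activity by genotype. As a minimal sketch of that kind of analysis (not the authors' actual pipeline), the following assumes a hypothetical table hemolytic_activity.csv with columns animal, genotype, timepoint, and hemolytic_activity, and fits a random-intercept model with statsmodels; the study's real covariates and model structure may differ.

    ```python
    # Sketch of a repeated-measures mixed-model analysis of hemolytic complement
    # activity against TCC genotype, in the spirit of the design described above.
    # The file and column names (animal, genotype, timepoint, hemolytic_activity)
    # are hypothetical; the original study's models and covariates may differ.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("hemolytic_activity.csv")  # one row per animal x timepoint

    # A random intercept per animal accounts for repeated measurements;
    # genotype and vaccination timepoint enter as fixed effects.
    model = smf.mixedlm(
        "hemolytic_activity ~ C(genotype) * C(timepoint)",
        data=df,
        groups=df["animal"],
    )
    result = model.fit()
    print(result.summary())
    ```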

    Small-scale (flash) flood early warning in the light of operational requirements: opportunities and limits with regard to user demands, driving data, and hydrologic modeling techniques

    Get PDF
    In recent years, the Free State of Saxony (Eastern Germany) was repeatedly hit by both extensive riverine flooding and flash flood events, emerging foremost from convective heavy rainfall. Especially after a couple of small-scale yet disastrous events in 2010, preconditions, drivers, and methods for deriving flash-flood-related early warning products are investigated. The aim is to clarify the feasibility and the limits of envisaged early warning procedures for small catchments hit by flash-flood-producing heavy rain events. Early warning about potentially flash-flood-prone situations (i.e., warning with a lead time that meets the reaction-time needs of the stakeholders involved in flood risk management) needs to take into account not only hydrological but also meteorological and communication issues. Therefore, we propose a threefold methodology to identify potential benefits and limitations in a real-world warning/reaction context. First, the user demands (with respect to desired/required warning products, preparation times, etc.) are investigated. Second, focusing on small catchments of some hundred square kilometers, two quantitative precipitation forecast (QPF) products are verified. Third, considering the user needs as well as the input uncertainty (foremost emerging from an uncertain QPF), a feasible yet robust hydrological modeling approach is proposed on the basis of pilot studies, employing deterministic, data-driven, and simple scoring methods.
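    The second step of the methodology is QPF verification. As a minimal, hedged sketch of one standard way to verify precipitation forecasts over small catchments (contingency-table scores for threshold exceedance), the following uses made-up rainfall sums and an illustrative 20 mm threshold; the abstract does not specify which scores or thresholds the study actually employed.

    ```python
    # Minimal sketch of categorical QPF verification: threshold exceedance turns
    # forecast/observed rainfall into yes/no events, from which standard
    # contingency-table scores are computed. Threshold and arrays are illustrative;
    # the study's actual verification setup is not specified in the abstract.
    import numpy as np

    def contingency_scores(forecast_mm, observed_mm, threshold_mm=20.0):
        fcst = np.asarray(forecast_mm) >= threshold_mm
        obs = np.asarray(observed_mm) >= threshold_mm
        hits = np.sum(fcst & obs)
        misses = np.sum(~fcst & obs)
        false_alarms = np.sum(fcst & ~obs)
        pod = hits / (hits + misses) if hits + misses else np.nan                    # probability of detection
        far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan  # false alarm ratio
        csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan  # critical success index
        return {"POD": pod, "FAR": far, "CSI": csi}

    # Example with made-up catchment-averaged event rainfall sums (mm)
    print(contingency_scores([35, 5, 22, 0, 18], [40, 2, 8, 0, 25]))
    ```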

    The ‘Galilean Style in Science’ and the Inconsistency of Linguistic Theorising

    Get PDF
    Chomsky’s principle of epistemological tolerance says that in theoretical linguistics contradictions between the data and the hypotheses may be temporarily tolerated in order to protect the explanatory power of the theory. The paper raises the following problem: What kinds of contradictions may be tolerated between the data and the hypotheses in theoretical linguistics? First, a model of paraconsistent logic is introduced which differentiates between weak and strong contradiction. As a second step, a case study is carried out which exemplifies that the principle of epistemological tolerance may be interpreted as the tolerance of weak contradiction. The third step of the argumentation focuses on another case study which exemplifies that the principle of epistemological tolerance must not be interpreted as the tolerance of strong contradiction. The reason for the latter insight is the unreliability and the uncertainty of introspective data. From this finding the author draws the conclusion that it is the integration of different data types that may lead to the improvement of current theoretical linguistics, and that the integration of different data types requires a novel methodology which, for the time being, is not available.
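    As a minimal illustration of the logical setting involved (not the paper's own formalism, whose weak/strong distinction is specific to its paraconsistent model): the point of a paraconsistent consequence relation is that the explosion principle fails, so a tolerated contradiction does not trivialize the theory.

    ```latex
    % Classically, a contradiction entails any statement whatsoever
    % (ex contradictione quodlibet); a paraconsistent consequence relation blocks this.
    \[
      \{p,\ \neg p\} \vdash_{\mathrm{CL}} q \quad\text{for arbitrary } q,
      \qquad\text{whereas}\qquad
      \{p,\ \neg p\} \nvdash_{\mathrm{PC}} q .
    \]
    ```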

    Probing the SELEX Process with Next-Generation Sequencing

    Get PDF
    Background: SELEX is an iterative process in which highly diverse synthetic nucleic acid libraries are selected over many rounds to finally identify aptamers with desired properties. However, little is understood about how binders are enriched over the course of selection. Next-generation sequencing offers the opportunity to open the black box and observe a large part of the population dynamics during the selection process. Methodology: We have performed a semi-automated SELEX procedure on the model target streptavidin, starting with a synthetic DNA oligonucleotide library, and compared results obtained by the conventional analysis via cloning and Sanger sequencing with next-generation sequencing. In order to follow the population dynamics during the selection, pools from all selection rounds were barcoded and sequenced in parallel. Conclusions: High-affinity aptamers can be readily identified simply by copy-number enrichment in the first selection rounds. Based on our results, we suggest a new selection scheme that avoids a high number of iterative selection rounds while reducing time, PCR bias, and artifacts.
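    The conclusion that high-affinity binders can be spotted by copy-number enrichment in the first rounds lends itself to a small worked example. The sketch below is illustrative only (toy read counts and a simple fold change of relative frequency, not the study's actual analysis); it ranks sequences by their enrichment between two barcoded rounds.

    ```python
    # Sketch: rank candidate aptamers by copy-number enrichment between early
    # SELEX rounds. Per-round count tables and the enrichment measure (fold change
    # of relative read frequency) are illustrative assumptions.
    from collections import Counter

    def relative_frequencies(read_counts: Counter) -> dict:
        total = sum(read_counts.values())
        return {seq: n / total for seq, n in read_counts.items()}

    def enrichment(round_a: Counter, round_b: Counter, pseudo: float = 1e-9) -> dict:
        """Fold change of relative frequency from round_a to round_b."""
        fa, fb = relative_frequencies(round_a), relative_frequencies(round_b)
        return {seq: fb[seq] / (fa.get(seq, 0.0) + pseudo) for seq in fb}

    # Toy counts for three sequences across two barcoded rounds
    round1 = Counter({"SEQ_A": 10, "SEQ_B": 12, "SEQ_C": 11})
    round2 = Counter({"SEQ_A": 900, "SEQ_B": 15, "SEQ_C": 9})

    top = sorted(enrichment(round1, round2).items(), key=lambda kv: kv[1], reverse=True)
    print(top[:3])  # SEQ_A should lead, mirroring early copy-number enrichment
    ```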

    A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory

    Get PDF
    Amazon’s Mechanical Turk (AMT) is a Web application that provides instant access to thousands of potential participants for survey-based psychology experiments, such as the acceptability judgment task used extensively in syntactic theory. Because AMT is a Web-based system, syntacticians may worry that the move out of the experimenter-controlled environment of the laboratory and onto the user-controlled environment of AMT could adversely affect the quality of the judgment data collected. This article reports a quantitative comparison of two identical acceptability judgment experiments, each with 176 participants (352 total): one conducted in the laboratory, and one conducted on AMT. Crucial indicators of data quality—such as participant rejection rates, statistical power, and the shape of the distributions of the judgments for each sentence type—are compared between the two samples. The results suggest that aside from slightly higher participant rejection rates, AMT data are almost indistinguishable from laboratory data
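    One of the comparisons described is between the shapes of the judgment distributions in the laboratory and AMT samples. A minimal sketch of such a comparison follows; the ratings are invented and the Kolmogorov-Smirnov test is merely one reasonable choice, not necessarily the statistic used in the article.

    ```python
    # Sketch: compare laboratory and AMT acceptability-judgment samples for one
    # sentence type. Ratings are made up; the KS test is an illustrative choice.
    import numpy as np
    from scipy.stats import ks_2samp

    lab_ratings = np.array([6, 7, 5, 7, 6, 6, 7, 5, 6, 7])   # e.g., 1-7 scale, lab sample
    amt_ratings = np.array([6, 7, 6, 5, 7, 6, 6, 6, 7, 5])   # same items, AMT sample

    stat, p_value = ks_2samp(lab_ratings, amt_ratings)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
    # A small KS statistic / large p suggests similar distributions, in line with
    # the article's finding that AMT data resemble laboratory data.
    ```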

    Collocation analysis for UMLS knowledge-based word sense disambiguation

    Get PDF
    BACKGROUND: The effectiveness of knowledge-based word sense disambiguation (WSD) approaches depends in part on the information available in the reference knowledge resource. Off the shelf, these resources are not optimized for WSD and might lack terms to model the context properly. In addition, they might include noisy terms which contribute to false positives in the disambiguation results. METHODS: We analyzed some collocation types which could improve the performance of knowledge-based disambiguation methods. Collocations are obtained by extracting candidate collocations from MEDLINE and then assigning them to one of the senses of an ambiguous word. We performed this assignment either using semantic group profiles or a knowledge-based disambiguation method. In addition to collocations, we used second-order features from a previously implemented approach. Specifically, we measured the effect of these collocations in two knowledge-based WSD methods. The first method, AEC, uses the knowledge from the UMLS to collect examples from MEDLINE which are used to train a Naïve Bayes approach. The second method, MRD, builds a profile for each candidate sense based on the UMLS and compares the profile to the context of the ambiguous word. We have used two WSD test sets which contain disambiguation cases mapped to UMLS concepts. The first one, the NLM WSD set, was developed manually by several domain experts and contains words with high-frequency occurrence in MEDLINE. The second one, the MSH WSD set, was developed automatically using the MeSH indexing in MEDLINE. It contains a larger set of words and covers a larger number of UMLS semantic types. RESULTS: The results indicate an improvement after the use of collocations, although the approaches have different performance depending on the data set. In the NLM WSD set, the improvement is larger for the MRD disambiguation method using second-order features. Assignment of collocations to a candidate sense based on UMLS semantic group profiles is more effective in the AEC method. In the MSH WSD set, the increment in performance is modest for all the methods. Collocations combined with the MRD disambiguation method have the best performance. The MRD disambiguation method and second-order features provide an insignificant change in performance. The AEC disambiguation method gives a modest improvement in performance. Assignment of collocations to a candidate sense based on knowledge-based methods has better performance. CONCLUSIONS: Collocations improve the performance of knowledge-based disambiguation methods, although results vary depending on the test set and method used. Generally, the AEC method is sensitive to query drift. Using AEC, just a few selected terms provide a large improvement in disambiguation performance. The MRD method handles noisy terms better but requires a larger set of terms to improve performance.
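    The MRD method is described as building a profile for each candidate sense and comparing it to the context of the ambiguous word. The sketch below illustrates that profile-comparison idea with toy bag-of-words profiles and cosine similarity; the vocabulary, profiles, and similarity measure are assumptions for illustration, since the actual method derives its profiles from the UMLS.

    ```python
    # Sketch of the profile-comparison idea described for the MRD method: each
    # candidate sense gets a bag-of-words profile, and the sense whose profile is
    # most similar to the ambiguous word's context wins. Profiles, context, and
    # the cosine measure are illustrative, not the authors' implementation.
    from collections import Counter
    import math

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def disambiguate(context_tokens, sense_profiles):
        context = Counter(context_tokens)
        return max(sense_profiles, key=lambda sense: cosine(context, sense_profiles[sense]))

    # Toy example for the ambiguous word "cold" (illness vs. temperature sense)
    profiles = {
        "common_cold": Counter({"virus": 3, "symptom": 2, "cough": 2, "fever": 1}),
        "cold_temperature": Counter({"weather": 3, "winter": 2, "exposure": 1}),
    }
    context = "patient reports cough and fever after a viral infection".split()
    print(disambiguate(context, profiles))  # expected: common_cold
    ```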